
    Incremental hopping-window pose-graph fusion for real-time vehicle localization

    In this work, we investigate and evaluate incremental hopping-window pose-graph fusion strategies for vehicle localization. Pose-graphs can model multiple absolute and relative vehicle localization sensors and can be optimized using non-linear techniques. We focus on the performance of incremental hopping-window optimization for online use in vehicles and compare it with global offline optimization. Our evaluation is based on 180 km of vehicle trajectories recorded in highway, urban, and rural areas, accompanied by post-processed Real-Time Kinematic GNSS as ground truth. The results exhibit a 17% reduction in the error's standard deviation and a significant reduction in GNSS outliers when compared with automotive-grade GNSS receivers. Whereas the computation cost of global pose-graph fusion increases linearly with the size of the pose-graph, incremental hopping-window optimization bounds it, at a cost of only 1% in accuracy. This allows real-time use of non-linear pose-graph fusion for vehicle localization.
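
    The bounded-cost claim follows directly from the window structure: each optimization touches only a fixed number of poses. The Python sketch below illustrates the schedule only; the window and hop sizes are assumptions, and optimize_window is a hypothetical callback standing in for a non-linear solver, not the paper's actual implementation.

    WINDOW = 100  # poses per window (assumed, not the paper's configuration)
    HOP = 100     # hop size; HOP == WINDOW gives non-overlapping windows

    def fuse_trajectory(measurements, optimize_window):
        """Optimize a long trajectory window by window.

        optimize_window is a hypothetical solver callback: it takes a slice
        of measurements plus an anchor pose and returns optimized poses.
        """
        anchor = None   # last optimized pose of the previous window
        estimates = []
        for start in range(0, len(measurements), HOP):
            window = measurements[start:start + WINDOW]
            # Cost per call depends only on WINDOW, not on the total
            # trajectory length, which bounds the computation.
            poses = optimize_window(window, anchor=anchor)
            estimates.extend(poses)
            anchor = poses[-1]
        return estimates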

    An experimental study on relative and absolute pose graph fusion for vehicle localization

    In this work, we investigate and evaluate multiple pose-graph fusion strategies for vehicle localization. We focus on fusing a single absolute localization system, i.e. an automotive-grade Global Navigation Satellite System (GNSS) receiver at 1 Hz, with a single relative localization system, i.e. vehicle odometry at 25 Hz. Our evaluation is based on 180 km of vehicle trajectories recorded in highway, urban, and rural areas, accompanied by post-processed Real-Time Kinematic GNSS as ground truth. Compared to non-fused GNSS, the results exhibit a significant 18% reduction in the error's standard deviation, but the bias in the error is unchanged. We show that the underlying cause is that errors in GNSS readings are highly correlated in time. This correlation produces a bias that cannot be compensated for by the relative localization information from the odometry, although that information can reduce the standard deviation of the error.
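
    To illustrate how such a graph can be assembled, the sketch below uses the open-source GTSAM library as a stand-in for the paper's solver: the 25 Hz odometry becomes between-factors on consecutive poses and each 1 Hz GNSS fix becomes a prior factor. The noise values are illustrative assumptions, not the paper's calibration.

    import numpy as np
    import gtsam

    def fuse_gnss_odometry(odometry, gnss_fixes):
        """odometry:   list of (dx, dy, dtheta) relative motions (25 Hz)
        gnss_fixes: dict {pose_index: (x, y)} absolute fixes (1 Hz)"""
        graph = gtsam.NonlinearFactorGraph()
        initial = gtsam.Values()

        odo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.01]))
        # Large heading sigma: GNSS provides position only (assumption).
        gnss_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([2.0, 2.0, 1e3]))

        pose = gtsam.Pose2(0.0, 0.0, 0.0)
        initial.insert(0, pose)
        for i, (dx, dy, dth) in enumerate(odometry):
            delta = gtsam.Pose2(dx, dy, dth)
            graph.add(gtsam.BetweenFactorPose2(i, i + 1, delta, odo_noise))
            pose = pose.compose(delta)  # dead-reckoned initial guess
            initial.insert(i + 1, pose)

        for idx, (x, y) in gnss_fixes.items():
            graph.add(gtsam.PriorFactorPose2(idx, gtsam.Pose2(x, y, 0.0), gnss_noise))

        return gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()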

    A stereo perception framework for autonomous vehicles

    Stereo cameras are crucial sensors for self-driving vehicles, as they are low-cost and can be used to estimate depth. They serve multiple purposes, such as object detection, depth estimation, and semantic segmentation. In this paper, we propose a stereo vision-based perception framework for autonomous vehicles. It uses three deep neural networks simultaneously to perform free-space detection, lane boundary detection, and object detection on image frames captured by the stereo camera. The distance of the detected objects from the vehicle is estimated from the disparity image computed from the two stereo image frames. The proposed stereo perception framework runs at 7.4 Hz on the Nvidia Drive PX 2 hardware platform, which further allows its use in multi-sensor fusion for localization, mapping, and path planning in autonomous vehicle applications.
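
    The depth-from-disparity step relies on the standard relation Z = f * B / d for a rectified stereo pair. A minimal sketch using OpenCV's semi-global block matcher is shown below; the focal length, baseline, and matcher settings are placeholder assumptions, not the framework's actual parameters.

    import cv2
    import numpy as np

    FOCAL_PX = 1000.0   # focal length in pixels (assumed)
    BASELINE_M = 0.30   # stereo baseline in meters (assumed)

    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9)

    def depth_at(left_gray, right_gray, u, v):
        """Metric depth at pixel (u, v) from a rectified grayscale stereo pair."""
        # StereoSGBM returns fixed-point disparity scaled by 16.
        disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
        d = disp[v, u]
        if d <= 0:      # no valid disparity at this pixel
            return None
        return FOCAL_PX * BASELINE_M / d   # Z = f * B / d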

    Pose-graph based Crowdsourced Mapping Framework

    Autonomous vehicles depend on High Definition (HD) maps. The process of generating and updating these maps is slow, expensive, and not scalable to the whole world. Crowdsourcing vehicle sensor data to generate and update maps is a solution to this problem. In this paper, we propose and evaluate an end-to-end pose-graph optimization-based mapping framework using crowdsourced vehicle data. The in-vehicle data acquisition framework and the cloud-based mapping framework, which fuses data from a consumer-grade Global Navigation Satellite System (GNSS) receiver, an odometry sensor, and a stereo camera, are described in detail. We focus on using stereo image pairs for loop-closure detection to combine crowdsourced data from different sessions that are affected by GNSS biases. We evaluate our framework on a dataset of more than 180 km recorded around the Eindhoven area. After the map generation process, the results exhibit a 56.23% improvement in maximum offset error and a 24.39% improvement in precision around the loop-closure area.
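
    A loop closure detected between two sessions can be expressed as one extra constraint in the pose-graph. A minimal sketch using GTSAM follows; a robust (Huber) noise model is one common way to guard against false matches, though the paper does not specify its exact formulation, and the sigmas are illustrative assumptions.

    import numpy as np
    import gtsam

    def add_loop_closure(graph, key_a, key_b, relative_pose):
        """Tie pose key_a (session A) to pose key_b (session B) using the
        relative pose recovered from matching their stereo image pairs."""
        base = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.5, 0.5, 0.05]))
        # Huber M-estimator down-weights outlier (false) loop closures.
        robust = gtsam.noiseModel.Robust.Create(
            gtsam.noiseModel.mEstimator.Huber.Create(1.345), base)
        graph.add(gtsam.BetweenFactorPose2(key_a, key_b, relative_pose, robust))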

    Real-Time Vehicle Positioning and Mapping Using Graph Optimization

    In this work, we propose and evaluate a pose-graph optimization-based real-time multi-sensor fusion framework for vehicle positioning using low-cost automotive-grade sensors. Pose-graphs can model multiple absolute and relative vehicle positioning sensor measurements and can be optimized using non-linear techniques. We model pose-graphs using measurements from a precise stereo camera-based visual odometry system, a robust odometry system using the in-vehicle velocity and yaw-rate sensors, and an automotive-grade GNSS receiver. Our evaluation is based on a dataset with 180 km of vehicle trajectories recorded in highway, urban, and rural areas, accompanied by post-processed Real-Time Kinematic GNSS as ground truth. We compare the architecture's performance with (i) vehicle odometry and GNSS fusion and (ii) stereo visual odometry, vehicle odometry, and GNSS fusion, under both offline and real-time optimization strategies. The results exhibit a 20.86% reduction in the localization error's standard deviation and a significant reduction in outliers when compared with automotive-grade GNSS receivers.
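
    One way to read the second configuration is that stereo visual odometry and wheel odometry each contribute an independent between-factor over the same pose pair, weighted by its own noise model. A minimal GTSAM sketch follows; the sigmas are illustrative assumptions, not the paper's calibration.

    import numpy as np
    import gtsam

    def add_relative_factors(graph, i, vo_delta, odo_delta):
        """Attach both a stereo-VO and a wheel-odometry constraint between
        consecutive poses i and i+1; vo_delta and odo_delta are gtsam.Pose2
        relative motions from the two sources."""
        vo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05, 0.05, 0.005]))
        odo_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.10, 0.10, 0.010]))
        graph.add(gtsam.BetweenFactorPose2(i, i + 1, vo_delta, vo_noise))
        graph.add(gtsam.BetweenFactorPose2(i, i + 1, odo_delta, odo_noise))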

    Architecture Design and Development of an On-board Stereo Vision System for Cooperative Automated Vehicles

    In a cooperative automated driving scenario such as platooning, the ego vehicle needs reliable and accurate perception capabilities to autonomously follow the lead vehicle. This paper presents the architecture design and development of an on-board stereo vision system for cooperative automated vehicles. The proposed system takes stereo image pairs as input. It uses three deep neural networks to simultaneously detect and classify objects, lane markings, and the free-space boundary in front of the ego vehicle. The rectified left and right image frames of the stereo camera are used to compute a disparity map to estimate each detected object's depth and radial distance. The system also estimates each object's relative velocity, azimuth, and elevation angle with respect to the ego vehicle. It sends the perceived information to the vehicle control system and displays it in a meaningful way on the human-machine interface. The system runs on both a PC (x86_64 architecture) with an Nvidia GPU and the Nvidia Drive PX 2 (aarch64 architecture) automotive-grade compute platform. It is deployed and evaluated on a Renault Twizy cooperative automated driving research platform. The presented results show that the stereo vision system works in real time and is useful for cooperative automated vehicles.
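
    Under the pinhole camera model, an object's radial distance, azimuth, and elevation follow from its pixel location and disparity. The sketch below shows this geometry; the intrinsics and baseline are placeholder assumptions, not the system's calibration.

    import math

    FX, FY = 1000.0, 1000.0   # focal lengths in pixels (assumed)
    CX, CY = 640.0, 360.0     # principal point (assumed)
    BASELINE_M = 0.30         # stereo baseline in meters (assumed)

    def object_geometry(u, v, disparity):
        """Radial distance, azimuth, and elevation of an object detected at
        pixel (u, v); assumes a valid (positive) disparity."""
        z = FX * BASELINE_M / disparity   # depth along the optical axis
        x = (u - CX) * z / FX             # lateral offset
        y = (v - CY) * z / FY             # vertical offset
        radial = math.sqrt(x * x + y * y + z * z)
        azimuth = math.atan2(x, z)        # positive to the right
        elevation = math.atan2(-y, z)     # image v grows downward
        return radial, azimuth, elevation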

    A SysML-based Design and Development of Stereo Vision System with Pose and Velocity Estimation for Cooperative Automated Vehicles

    Cooperative automated vehicles must perceive the environment accurately and have precise information about the lead vehicle's pose and velocity. This paper presents a SysML-based approach to design and develop a stereo vision-based perception system in an urban platooning scenario. It detects objects, lane markers, and free space in front of the follower vehicle using deep neural networks and computes the lead vehicle's relative pose and velocity. The relative pose is estimated using a geometric model-based pose estimation algorithm. The relative velocity is estimated from the change in pose over a known time interval. The lead vehicle's relative pose and velocity are used to control the follower vehicle so that it follows the lead vehicle autonomously. The system also displays the lead vehicle's pose and velocity information in a meaningful way on the follower vehicle's in-vehicle display. The proposed system takes input from a custom-built automotive-grade stereo camera and runs on an automotive-grade embedded platform. It is tested both in simulation and on a prototype cooperative automated vehicle research platform. The evaluation results demonstrate that the proposed system operates in real time and is suitable for cooperative automated vehicles.
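
    Estimating relative velocity from the change in pose over a known time interval is a finite difference. The minimal sketch below illustrates the idea only; the names are assumptions, and a real system would smooth or filter the estimate.

    def relative_velocity(pose_prev, pose_curr, dt):
        """Finite-difference relative velocity of the lead vehicle.
        pose_* are (x, y) relative positions in meters; dt is in seconds."""
        vx = (pose_curr[0] - pose_prev[0]) / dt
        vy = (pose_curr[1] - pose_prev[1]) / dt
        return vx, vy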

    Object detection, lane detection, and free space detection

    In this paper, we present a deep neural network-based real-time integrated framework to detect objects, lane markings, and drivable space using a monocular camera for advanced driver assistance systems. The object detection framework detects and tracks objects on the road, such as cars, trucks, pedestrians, bicycles, motorcycles, and traffic signs. The lane detection framework identifies the different lane markings on the road and distinguishes between the ego lane and adjacent lane boundaries. The free space detection framework estimates the drivable space in front of the vehicle. We propose a pipeline that combines the three deep neural networks into a single integrated framework performing object detection, lane detection, and free space detection simultaneously. The integrated framework is implemented in C++ and runs in real time on Nvidia's Drive PX 2 platform.
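
    The integrated pipeline amounts to fanning one camera frame out to three task-specific networks and collecting their outputs. The Python sketch below illustrates that structure only; the network objects are hypothetical stand-ins, and the paper's actual implementation is in C++.

    def process_frame(frame, object_net, lane_net, freespace_net):
        """Run the three task networks on the same monocular frame.
        The *_net callables are hypothetical stand-ins for trained models."""
        objects = object_net(frame)        # detections and tracks
        lanes = lane_net(frame)            # ego and adjacent lane boundaries
        drivable = freespace_net(frame)    # drivable-space estimate
        return objects, lanes, drivable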